Karush-Kuhn-Tucker Optimality Based Local Search for Enhanced Convergence of Evolutionary Multi-Criterion Optimization Methods
Authors
Abstract
Recent studies have used the Karush-Kuhn-Tucker (KKT) optimality conditions to develop a KKT Proximity Measure (KKTPM) for terminating a multi-objective optimization run based on the theoretical convergence of its solutions. Beyond serving as a termination criterion, the KKTPM provides a single convergence measure for each solution relative to the Pareto-optimal set, so it can also be applied more directly to enhance the performance of an optimization algorithm. In this paper, we integrate KKTPM information with an evolutionary multi-objective optimization (EMO) algorithm to enhance its convergence to Pareto-optimal solutions. Specifically, we use the KKTPM to identify poorly converged nondominated solutions in every generation and apply an achievement scalarizing function (ASF) based local search procedure to improve their convergence. Simulations on both constrained and unconstrained multi- and many-objective optimization problems demonstrate that the hybrid algorithm significantly improves overall convergence. This study brings evolutionary optimization closer to the mainstream optimization field and should motivate researchers to use the KKTPM further within EMO and other numerical optimization algorithms to improve their behavior.
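To make the hybrid step concrete, here is a minimal Python sketch of the idea described in the abstract: score the current nondominated solutions with a KKT-based convergence measure, then locally re-optimize the worst-scored ones by minimizing an achievement scalarizing function. The exact optimization-based KKTPM computation is not reproduced here; `kkt_score` is a placeholder for it, and the Nelder-Mead call, the 20% repair fraction, and all names are illustrative assumptions rather than the paper's actual procedure.

```python
import numpy as np
from scipy.optimize import minimize

def asf(x, objectives, z_ref, weights, rho=1e-4):
    """Augmented achievement scalarizing function:
    max_i (f_i(x) - z_i) / w_i  +  rho * sum_i (f_i(x) - z_i) / w_i."""
    f = np.array([obj(x) for obj in objectives])
    terms = (f - z_ref) / weights
    return terms.max() + rho * terms.sum()

def repair_step(population, objectives, kkt_score, frac=0.2):
    """Locally re-optimize the worst-converged fraction of the population.
    kkt_score(x) is a stand-in for the KKTPM: ~0 means a (near-)KKT point."""
    F = np.array([[obj(x) for obj in objectives] for x in population])
    z_ref = F.min(axis=0) - 1e-6                # rough ideal-point estimate
    scores = np.array([kkt_score(x) for x in population])
    n_worst = max(1, int(frac * len(population)))
    for i in np.argsort(scores)[-n_worst:]:     # largest score = worst converged
        w = np.maximum(F[i] - z_ref, 1e-6)      # weights along the solution's own direction
        res = minimize(asf, population[i], args=(objectives, z_ref, w),
                       method="Nelder-Mead")
        population[i] = res.x
    return population
```

Here `population` is a list of decision vectors and `objectives` a list of callables; constraint handling, variable bounds, and the choice of local optimizer are all omitted for brevity.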
Similar Resources
Sequential Optimality Conditions and Variational Inequalities
In recent years, sequential optimality conditions have frequently been used to prove convergence of iterative methods for nonlinear constrained optimization problems. Sequential optimality conditions do not require any constraint qualification. In this paper, we present the necessary sequential complementary approximate Karush-Kuhn-Tucker (CAKKT) condition for a point to be a solution of a ...
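For reference, one common formulation of the CAKKT condition from this literature, written for $\min f(x)$ subject to $g_i(x) \le 0$ and $h_j(x) = 0$; this is a paraphrase of the standard definition and may differ in detail from the cited paper:

```latex
% CAKKT: a feasible point x* admits sequences x^k -> x*, lambda^k >= 0, mu^k with
\begin{align*}
  \nabla f(x^k) + \sum_{i=1}^{m} \lambda_i^k \nabla g_i(x^k)
                + \sum_{j=1}^{p} \mu_j^k \nabla h_j(x^k) &\longrightarrow 0, \\
  \lambda_i^k \, g_i(x^k) \longrightarrow 0, \qquad
  \mu_j^k \, h_j(x^k) &\longrightarrow 0 .
\end{align*}
```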
Full text

On Sequential Optimality Conditions without Constraint Qualifications for Nonlinear Programming with Nonsmooth Convex Objective Functions
Sequential optimality conditions provide adequate theoretical tools to justify stopping criteria for nonlinear programming solvers. Here, nonsmooth approximate gradient projection and complementary approximate Karush-Kuhn-Tucker conditions are presented. These sequential optimality conditions are satisfied by local minimizers of optimization problems independently of the fulfillment of constrai...
Full text

Transposition Theorems and Qualification-Free Optimality Conditions
New theorems of the alternative for polynomial constraints (based on the Positivstellensatz from real algebraic geometry) and for linear constraints (generalizing the transposition theorems of Motzkin and Tucker) are proved. Based on these, two Karush-John optimality conditions, holding without any constraint qualification, are proved for single- or multi-objective constrained optimization prob...
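For reference, the classical Fritz John conditions, which are of the same qualification-free type as the Karush-John conditions named above (the cited paper's exact statement may differ): at a local minimizer $x^*$ of $\min f(x)$ subject to $g_i(x) \le 0$, there exist multipliers, not all zero, such that

```latex
\begin{align*}
  \lambda_0 \nabla f(x^*) + \sum_{i=1}^{m} \lambda_i \nabla g_i(x^*) &= 0,
  \qquad (\lambda_0, \lambda) \ge 0, \quad (\lambda_0, \lambda) \ne 0, \\
  \lambda_i \, g_i(x^*) &= 0, \qquad i = 1, \dots, m.
\end{align*}
```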
Full text

Non-Lipschitz Semi-Infinite Optimization Problems Involving Local Cone Approximation
In this paper we study the nonsmooth semi-infinite programming problem with inequality constraints. First, we consider the notions of local cone approximation $\Lambda$ and $\Lambda$-subdifferential. Then, we derive the Karush-Kuhn-Tucker optimality conditions under the Abadie and the Guignard constraint qualifications.
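For reference, the smooth finite-dimensional KKT system that the cited paper generalizes (its nonsmooth, semi-infinite version replaces gradients with $\Lambda$-subdifferentials): at a candidate minimizer $x^*$ of $\min f(x)$ subject to $g_i(x) \le 0$,

```latex
\begin{align*}
  \nabla f(x^*) + \sum_{i=1}^{m} \lambda_i \nabla g_i(x^*) &= 0
    && \text{(stationarity)} \\
  g_i(x^*) \le 0, \qquad \lambda_i &\ge 0
    && \text{(primal and dual feasibility)} \\
  \lambda_i \, g_i(x^*) &= 0
    && \text{(complementary slackness)}
\end{align*}
```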
Full text

Optimization Tutorial 2: Newton's Method and Karush-Kuhn-Tucker (KKT) Conditions
In the first part of the tutorial, we introduced the problem of unconstrained optimization, provided necessary and sufficient conditions for optimality of a solution to this problem, and described the gradient descent method for finding a (locally) optimal solution to a given unconstrained optimization problem. We now describe another method for unconstrained optimization, namely Newton’s metho...
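Since this entry's full text is not reproduced here, the following is a minimal, self-contained sketch of the Newton iteration it refers to, assuming a twice-differentiable objective with user-supplied gradient and Hessian; step-size damping and Hessian modification are omitted.

```python
import numpy as np

def newton(x0, grad, hess, tol=1e-8, max_iter=50):
    """Undamped Newton iteration: solve H(x_k) p = -grad(x_k), set x_{k+1} = x_k + p."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:             # first-order stationarity reached
            break
        x = x + np.linalg.solve(hess(x), -g)    # Newton step
    return x

# Example: minimize f(x, y) = (x - 1)^2 + 10 * (y + 2)^2  (minimum at (1, -2))
grad = lambda v: np.array([2 * (v[0] - 1), 20 * (v[1] + 2)])
hess = lambda v: np.array([[2.0, 0.0], [0.0, 20.0]])
print(newton([0.0, 0.0], grad, hess))           # one Newton step suffices here
```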
Full text